17 research outputs found

    Off-Line Camera-Based Calibration for Optical See-Through Head-Mounted Displays

    In recent years, the market entry of self-contained optical see-through headsets with integrated multi-sensor capabilities has paved the way for innovative, technology-driven augmented reality applications and has encouraged the adoption of these devices in highly demanding medical and industrial settings. Despite this, the display calibration process of consumer-level systems is still sub-optimal, particularly for applications that require high accuracy in the spatial alignment between computer-generated elements and the real-world scene. State-of-the-art manual and automated calibration procedures designed to estimate all the projection parameters are too complex for real application cases outside laboratory environments. This paper describes a fast off-line calibration procedure that only requires a camera to observe a planar pattern displayed on the see-through display. The camera, which replaces the user's eye, must be placed within the eye-motion-box of the see-through display. The method exploits standard camera calibration and computer vision techniques to estimate the projection parameters of the display model for a generic camera position. At execution time, the projection parameters can then be refined through a planar homography that encapsulates the shift and scaling effect associated with the estimated relative translation from the old camera position to the current position of the user's eye. Compared to classical SPAAM techniques, which still rely on the human element, and to other camera-based calibration procedures, the proposed technique is flexible and easy to replicate in both laboratory environments and real-world settings.
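The shift-and-scale refinement described above can be sketched as a simple function (an illustrative, assumption-laden model: the function name, the default intrinsics, and the single reference plane at distance d are hypothetical, not the paper's actual implementation):

```python
def refine_point(u, v, dx, dy, dz, f=1000.0, d=500.0, cx=640.0, cy=360.0):
    """Shift-and-scale refinement of a display pixel (u, v) for an eye
    translation (dx, dy, dz) in mm from the calibrated camera position.

    Models the planar homography as a uniform scale about the principal
    point (cx, cy) plus a shift, for a reference plane at distance d and
    a focal length f in pixels. All names and defaults are illustrative.
    """
    s = d / (d + dz)         # moving the eye back scales the image down
    du = f * dx / (d + dz)   # a lateral eye shift translates the image
    dv = f * dy / (d + dz)
    return ((u - cx) * s + cx + du, (v - cy) * s + cy + dv)
```

With zero translation the mapping is the identity, as expected of a refinement applied on top of the off-line calibration.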

    Software Framework for Customized Augmented Reality Headsets in Medicine

    The growing availability of self-contained and affordable augmented reality headsets such as the Microsoft HoloLens is encouraging the adoption of these devices in the healthcare sector as well. However, technological and human-factor limitations still hinder their routine use in clinical practice. Chief among them are their general-purpose nature and the lack of a standardized framework suited for medical applications and free of platform-dependent tracking techniques and/or complex calibration procedures. To overcome these limitations, this paper presents a software framework that supports the development of augmented reality applications for custom-made head-mounted displays intended to aid high-precision manual tasks. The software platform is highly configurable and computationally efficient, and it allows the deployment of augmented reality applications capable of supporting in situ visualization of medical imaging data. The framework can provide both optical and video see-through-based augmentations, and it features a robust optical tracking algorithm. An experimental study was designed to assess the efficacy of the platform in guiding a simulated surgical incision task. In the experiments, users were asked to perform a digital incision task with and without the aid of the augmented reality headset. Task accuracy was evaluated by measuring the similarity between the traced curve and the planned one. The average error in the augmented reality tests was < 1 mm. The results confirm that the proposed framework, coupled with the new-concept headset, may boost the integration of augmented reality headsets into routine clinical practice.
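The accuracy metric mentioned above (similarity between the traced and the planned curve) can be illustrated with a nearest-point mean distance over sampled curve points; this is only a plausible sketch of such a measure, not the framework's actual metric:

```python
import math

def mean_curve_error(traced, planned):
    """Mean distance from each traced sample to its nearest planned sample.

    Points are (x, y) tuples in mm; both curves are given as point lists.
    Illustrative metric only, assuming dense sampling of the planned curve.
    """
    return sum(min(math.dist(p, q) for q in planned) for p in traced) / len(traced)
```

A perfectly retraced curve scores 0 mm; a curve offset by 1 mm everywhere scores roughly 1 mm.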

    Optical See-Through Head-Mounted Displays With Short Focal Distance: Conditions for Mitigating Parallax-Related Registration Error

    Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields, but their ability to profitably assist high-precision activities in the peripersonal space is still sub-optimal, owing to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact, on the parallax-related AR misregistration, of optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the user's fixation point in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and validate the model on an OST display with a short focal distance. The test results demonstrate that with our strategy the parallax-related registration error is submillimetric, provided that the observed scene stays within a suitable view volume falling in a ±10 cm depth range around the focal plane of the display. This finding paves the way for new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.
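The dependence of the parallax-related misregistration on the depth offset from the display focal plane can be captured by a first-order model like the following (a hedged sketch; the exact form and coefficients of the authors' error model may differ):

```python
def parallax_error_mm(eye_shift_mm, depth_mm, focal_plane_mm):
    """First-order parallax misregistration model (illustrative):
    proportional to the residual eye shift and to the relative depth
    offset of the observed point from the display focal plane."""
    return abs(eye_shift_mm) * abs(depth_mm - focal_plane_mm) / focal_plane_mm

# With a hypothetical 5 mm residual eye shift and a 500 mm focal plane,
# a scene 100 mm away from the focal plane stays at the ~1 mm error level,
# consistent with submillimetric accuracy inside a +/-10 cm depth band.
```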

    Augmented Reality-Assisted Craniotomy for Parasagittal and Convexity En Plaque Meningiomas and Custom-Made Cranio-Plasty: A Preliminary Laboratory Report

    Background: This report discusses the utility of a wearable augmented reality platform in neurosurgery for parasagittal and convexity en plaque meningiomas with bone flap removal and custom-made cranioplasty. Methods: A real patient with an en plaque cranial vault meningioma with diffuse and extensive dural involvement, extracranial extension into the calvarium, and homogeneous contrast enhancement on gadolinium-enhanced T1-weighted MRI was selected for this case study. A patient-specific manikin was designed, starting from the segmentation of the patient's preoperative MRI images, to simulate a craniotomy procedure. Surgical planning was performed according to the segmented anatomy, and customized bone flaps were designed accordingly. During the surgical simulation stage, the VOSTARS head-mounted display was used to accurately display the planned craniotomy trajectory over the manikin skull. The precision of the craniotomy was assessed by evaluating the fit of previously prepared custom-made bone flaps. Results: A bone flap with a radius 0.5 mm smaller than that of an ideal craniotomy fitted perfectly over the performed craniotomy, demonstrating an error of less than ±1 mm in task execution. The results of this laboratory experiment suggest that the proposed augmented reality platform helps simulate convexity en plaque meningioma resection and custom-made cranioplasty as carefully planned in the preoperative phase. Conclusions: Augmented reality head-mounted displays have the potential to be a useful adjunct in surgical tumor resection, cranial vault lesion craniotomy, and skull base surgery, but further studies with larger series are needed.

    Hybrid Simulation and Planning Platform for Cryosurgery with Microsoft HoloLens

    Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advancement of devices, along with improvements in surgical technique, has turned cryosurgery from an experimental option into an established one for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient's preoperative images. Several works have aimed to model cryoablation through heat-transfer simulations; however, most software applications do not meet key requirements for routine clinical use, such as high computational speed and user-friendliness. This work develops an intuitive platform for anatomical understanding and preoperative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator that exploits 3D printing and the augmented reality functionalities of the Microsoft HoloLens. The proposed platform was preliminarily validated through the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, make surgical planning easier than with the traditional method, and strengthen the memorization of the surgical plan.
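At their core, the heat-transfer simulations mentioned above amount to stepping a diffusion equation over the tissue domain; a minimal 1D explicit finite-difference step is sketched below (illustrative only — real cryoablation models add perfusion, metabolic, and phase-change terms, e.g. in Pennes' bioheat equation):

```python
def diffuse_1d(temps, alpha, dx, dt):
    """One explicit finite-difference step of the 1D heat equation
    dT/dt = alpha * d2T/dx2, with fixed (Dirichlet) boundary values.

    temps: list of node temperatures; alpha: thermal diffusivity;
    dx: node spacing; dt: time step. All values are illustrative.
    """
    new = list(temps)
    r = alpha * dt / dx ** 2  # stability requires r <= 0.5
    for i in range(1, len(temps) - 1):
        new[i] = temps[i] + r * (temps[i + 1] - 2 * temps[i] + temps[i - 1])
    return new
```

Repeatedly applying such steps around a cryoprobe node, with freezing boundary conditions, is the kind of computation the cited planning software accelerates.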

    Wearable AR and 3D Ultrasound: Towards a Novel Way to Guide Surgical Dissections

    Nowadays, ultrasound (US) is increasingly chosen as the imaging modality for both diagnostic and interventional applications, owing to its safety, small footprint, and low cost. The combination of this imaging modality with wearable augmented reality (AR) systems, such as head-mounted displays (HMDs), emerges as a breakthrough technological solution, as it allows hands-free interaction with the augmented scene, an essential requirement for high-precision manual tasks such as surgery. In this study we propose the integration of an AR navigation system (HMD plus dedicated platform) with a 3D US imaging system to guide a dissection task that requires maintaining safety margins with respect to unexposed anatomical or pathological structures. For this purpose, a standard scalpel was sensorized to provide real-time feedback on the position of the instrument during the execution of the task. The accuracy of the system was quantitatively assessed in two experimental studies: a targeting experiment, which revealed a median error of 2.53 mm in estimating the scalpel-to-target distance, and a preliminary user study simulating a dissection task that requires reaching a predefined distance from an occult lesion. The results of the second experiment showed that the system can guide a dissection task with a mean accuracy of 0.65 mm and a mean angular error between the ideal and actual cutting planes of 2.07°. These results encourage further studies to fully exploit the potential of wearable AR and intraoperative US imaging to accurately guide deep surgical tasks, such as the excision of non-palpable breast tumors with optimal margin clearance.
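The real-time feedback from the sensorized scalpel reduces, in its simplest form, to a tip-to-target distance check against a safety margin; a minimal sketch (the function name and the margin value are hypothetical, not the study's implementation):

```python
import math

def scalpel_feedback(tip_mm, target_mm, margin_mm=5.0):
    """Distance from the tracked scalpel tip to the occult target
    (both as 3D points in mm) and whether the predefined safety
    margin has been reached. Names and defaults are illustrative."""
    d = math.dist(tip_mm, target_mm)
    return d, d <= margin_mm
```

In a guidance loop, this pair would drive the virtual aid rendered on the HMD (e.g., a distance readout and a stop cue at the margin).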

    How to Mitigate Perceptual Issues Caused by Geometric Aberrations in Video See-Through Augmented Reality Headsets During Manual Tasks

    No full text
    This thesis concerns the development of wearable stereoscopic video see-through (VST) augmented reality headsets, used in particular to guide manual tasks. The goal is to identify and mitigate the perceptual issues caused by geometric aberrations, which are inevitably present in real implementations of such systems, in order to provide the user's eyes with stimuli as consistent as possible with the naked-eye view of the same scene, thus enabling effective and comfortable stereoscopic vision and, possibly, realistic ocular convergence as a function of the fixation point. The analysis of state-of-the-art solutions, combined with the need to identify the geometric aberrations that most impair depth perception in VST systems, led to a preliminary geometric study of the problem. This study was conducted by simulating an ideal headset, referred to in the literature as "parallax-free", in which suitable mirrors make the optical axes and projection centers of the image acquisition system (cameras), the projection system (displays), and the viewing system (the user's eyes) physically coincident. Although such a system is ideal and cannot be fully implemented, its analysis in various configurations made it possible to identify the aspects to consider when implementing a real system. This, in turn, narrowed down the comparison of possible configurations of real headsets, carried out with a purpose-built simulator. In the various configurations, all based on sub-optimal (non-ideal) hardware, the simulator applied a geometric transformation known as a homography, which restores the ideal case, at least on a plane at a known distance.
The simulator tests showed that, thanks to the homography, headset implementation can be simplified by avoiding the need to move cameras and displays as a function of the working distance. The simulator results were validated by experiments performed with a prototype headset with movable cameras and displays. The simulations also show that eye-tracking systems, suitably combined with the homography, could bring the rendered view even closer to the naked-eye view. To this end, a feasibility study was carried out on integrating an eye-tracking camera into the headset.
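The correction at the heart of the thesis maps camera-image pixels to the display so that, on a plane at the known working distance, the view matches the ideal parallax-free case; applying a 3×3 homography to a pixel is the basic operation involved (a generic sketch, not the thesis code):

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through the 3x3 homography H (nested lists),
    with the usual perspective division by the third coordinate."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Because H can be recomputed in software for any working distance, cameras and displays can stay fixed, which is exactly the simplification the simulator results support.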

    Integration of Augmented Reality Head-Mounted Display and 3D Ultrasound for the In Situ Visualisation of Ultrasound-Guided Interventions

    No full text
    This thesis proposes the integration of a 3D ultrasound imaging system with a wearable augmented reality video see-through head-mounted display and its associated platform. The idea is to use the integrated system as guidance during in-depth high-precision manual tasks, such as targeting interventions (e.g., biopsies and dissections). The integrated system can show the 3D model of the target (derived from the volumetric ultrasound acquisition), along with additive virtual graphical aids specifically designed to suit the procedure to be guided, both superimposed and registered over the patient's anatomy. The aim is to exploit the virtual guidance to improve performance accuracy. The proposed integrated system has the potential to be used as a navigation system for ultrasound-guided interventions. Moreover, owing to its ability to simplify three-dimensional anatomical understanding and the spatial coordination and orientation of inexperienced users, it could profitably be used as a learning and training tool for new operators approaching this discipline.

    Closed-Loop Calibration for Optical See-Through Near-Eye Displays with Infinity Focus

    No full text
    In wearable augmented reality systems, optical see-through near-eye displays (OST NEDs) based on waveguides are becoming a standard, as they are generally preferred over solutions based on semi-reflective curved mirrors. This is mostly due to their ability to ensure reduced image distortion and a sufficiently wide eye motion box without requiring bulky optical and electronic components to be placed in front of the user's face and/or on the user's line of sight. In OST head-mounted displays (HMDs), the user's own view is augmented by optically combining it with the virtual content rendered on a two-dimensional (2D) microdisplay. To achieve a perfect combination of the light field of the real 3D world and the computer-generated 2D graphics projected on the display, an accurate alignment between real and virtual content must be achieved at the level of the NED imaging plane. To this end, we must know the exact position of the user's eyes within the HMD reference system. State-of-the-art methods model the eye-NED system as an off-axis pinhole camera and therefore include the contribution of the eye position in the modelling of the intrinsic matrix of the eye-NED. In this paper, we describe a method for robustly calibrating OST NEDs that explicitly drops this assumption. To verify the accuracy of our method, we conducted a set of experiments in a setup comprising a commercial waveguide-based OST NED and a camera in place of the user's eye. We tested a set of different camera (i.e., eye) positions within the eye box of the NED. The obtained results demonstrate that the proposed method yields accurate real-to-virtual alignment regardless of the position of the eye within the eye box of the NED (Figure 1). The achieved viewing accuracy was 0.85 ± 1.37 pixels.
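The real-to-virtual alignment error reported in pixels can be expressed by projecting a real-world point and the virtual point meant to overlay it through a pinhole model of the eye-NED system (an illustrative sketch; the intrinsic values are hypothetical, and the paper's method notably avoids folding the eye position into such intrinsics):

```python
import math

def project(point_mm, f_px=1500.0, cx=640.0, cy=360.0):
    """Pinhole projection of a 3D point (eye frame, mm) to display pixels.
    Intrinsics are illustrative placeholders."""
    x, y, z = point_mm
    return (f_px * x / z + cx, f_px * y / z + cy)

def alignment_error_px(real_pt, virtual_pt):
    """Pixel distance between the projections of a real point and the
    virtual point that should overlay it on the NED imaging plane."""
    (ur, vr), (uv, vv) = project(real_pt), project(virtual_pt)
    return math.hypot(ur - uv, vr - vv)
```

A perfectly registered pair projects to the same pixel, giving zero error; residuals of this kind, averaged over many points, yield figures like the 0.85 ± 1.37 pixels reported above.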

    Parallax-Free Registration for Augmented Reality Optical See-Through Displays in the Peripersonal Space

    No full text
    Egocentric augmented reality (AR) interfaces are quickly becoming a key asset for assisting high-precision activities in the peripersonal space in several application fields. In these applications, accurate and robust registration of computer-generated information to the real scene is hard to achieve with traditional optical see-through (OST) displays, given that it relies on the accurate calibration of the combined eye-display projection model. The calibration is required to efficiently estimate the projection parameters of the pinhole model that encapsulate the optical features of the display and whose values vary with the position of the user's eye. In this work, we describe an approach that prevents any parallax-related AR misregistration at a pre-defined working distance in OST displays with infinity focus; our strategy relies on a magnifier placed in front of the OST display and features a proper parameterization of the virtual rendering camera, achieved through a dedicated calibration procedure that accounts for the contribution of the magnifier. We model the registration error due to viewpoint parallax outside the ideal working distance. Finally, we validate our strategy on an OST display and show that sub-millimetric registration accuracy can be achieved for working distances within ±100 mm of the focal length of the magnifier.